This paper studies linear models of Bitcoin price that include regression features based on Bitcoin currency statistics, the mining process, Google search trends, and Wikipedia page visits. Compared with the price time series itself, the deviations of regression model predictions from actual prices exhibit simpler patterns. It is assumed that such patterns can be predicted by an experienced expert. In this way, using a combination of a regression model and expert corrections, one can obtain better results than from any regression model or expert opinion alone. The results show that a Bayesian approach makes it possible to use probabilistic methods with fat-tailed distributions and to take into account outliers in the Bitcoin price time series.
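The fat-tailed idea can be illustrated with a small sketch: a linear trend with injected outliers is fitted by maximum likelihood under a Student-t error distribution, which down-weights the outliers. This is a minimal sketch on synthetic data; the Bitcoin features, priors, and full Bayesian inference of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)
y[::10] += 15.0  # inject outliers, mimicking a heavy-tailed price series

def neg_log_lik(params):
    # linear model with Student-t (fat-tailed) residuals
    slope, intercept, log_scale, log_df = params
    resid = y - (slope * x + intercept)
    return -student_t.logpdf(resid, df=np.exp(log_df),
                             scale=np.exp(log_scale)).sum()

res = minimize(neg_log_lik, x0=[1.0, 0.0, 0.0, 1.0], method="Nelder-Mead")
slope_hat, intercept_hat = res.x[:2]  # robust estimates of the trend
```

Under a Gaussian likelihood, the injected outliers would pull the intercept upward; the heavy-tailed likelihood largely ignores them.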
This paper describes the use of deep Q-learning models for problems of sales time series analytics. In contrast to supervised machine learning, which is a kind of passive learning using historical data, Q-learning is a kind of active learning whose goal is to maximize a reward by an optimal sequence of actions. Model-free Q-learning approaches for optimal pricing strategies and supply-demand problems were considered in this work. The main idea of the study is to show that, using deep Q-learning in time series analytics, the sequence of actions can be optimized by maximizing a reward function in an environment for learning-agent interaction that is modeled using either a parametric model or a model based on historical data. In the pricing optimization case study, the environment was modeled using a sales dependence on extra price and stochastically simulated demand. In the supply-demand case study, it was proposed to use historical demand time series for environment modeling; the agent state is represented by promo actions, previous demand values, and weekly seasonality features. The obtained results show that, using deep Q-learning, we can optimize the decision-making process for pricing optimization and supply-demand problems. Environment modeling using a parametric model and historical data can be used for the cold start of a learning agent. At the next step, after the cold start, the trained agent can be used in a real business environment.
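A much-simplified sketch of the pricing idea: a single-state (bandit-style) Q-table is learned against a simulated environment in which demand decreases with price. The price grid, demand curve, and learning parameters are all hypothetical; the paper's deep Q-network and state features are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.array([5.0, 7.5, 10.0, 12.5])  # hypothetical price actions

def demand(price):
    # simulated stochastic demand, decreasing in price (illustrative only)
    return max(0.0, 100.0 - 7.0 * price + rng.normal(0.0, 5.0))

q = np.zeros(len(prices))  # single-state Q-table over price actions
alpha, eps = 0.1, 0.1      # learning rate and exploration probability
for step in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(len(prices)) if rng.random() < eps else int(np.argmax(q))
    reward = prices[a] * demand(prices[a])       # revenue as reward
    q[a] += alpha * (reward - q[a])              # Q-learning update

best_price = prices[int(np.argmax(q))]  # the revenue-maximizing price
```

Here expected revenue `p * (100 - 7p)` peaks at the second price point, and the learned Q-values identify it after the simulated cold-start interaction.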
This paper describes approaches for forming different predictive features of tweet data sets and using them in predictive analysis for decision-making support. Graph theory, as well as the theory of frequent itemsets and association rules, was used to form and retrieve different features from these data sets. The use of these approaches makes it possible to reveal a semantic structure in tweets related to a specified entity. It is shown that quantitative characteristics of semantic frequent itemsets can be used in predictive regression models with a specified target variable.
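The itemset idea can be sketched in a few lines: frequent term combinations are counted across a toy set of tokenized tweets. The tweets, the entity, and the support threshold are illustrative; the paper's graph-based features are not shown here.

```python
from collections import Counter
from itertools import combinations

# toy "tweets" about a hypothetical entity, tokenized into term sets
tweets = [
    {"bitcoin", "price", "rise"},
    {"bitcoin", "price", "drop"},
    {"bitcoin", "mining", "cost"},
    {"bitcoin", "price", "rise", "news"},
]

def frequent_itemsets(transactions, min_support, max_size=2):
    # count every term combination up to max_size, keep those above support
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(t), size):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

freq = frequent_itemsets(tweets, min_support=0.5)
# e.g. ("bitcoin", "price") appears in 3 of 4 tweets -> support 0.75
```

Supports of such semantic itemsets are the kind of quantitative characteristic that can then enter a regression model as features.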
This paper describes the use of Bayesian regression for building time series models and for stacking different predictive models for time series. Using Bayesian regression for time series modeling with nonlinear trend was analyzed. This approach makes it possible to estimate the uncertainty of time series prediction and to calculate value-at-risk characteristics. A hierarchical model for time series using Bayesian regression has been considered. In this approach, one set of parameters is the same for all data samples, while other parameters can differ for different data samples. Such an approach allows using the model in the case of short historical data for a specified time series, e.g., for new stores or new products in sales prediction problems. In the study of predictive model stacking, the models ARIMA, neural network, random forest, and extra trees were used for the predictions on the first level of the model ensemble. On the second level, the time series predictions of these models on the validation set were used for stacking with Bayesian regression. This approach yields distributions for the regression coefficients of these models, which makes it possible to estimate the uncertainty of each model's contribution to the stacking result. Information about these distributions allows us to select an optimal set of stacking models while taking domain knowledge into account. The probabilistic approach to stacking predictive models allows us to make risk assessments for the predictions that matter in a decision-making process.
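A minimal sketch of the second-level stacking step: validation-set predictions of two hypothetical first-level models are combined by linear regression, and a bootstrap approximates the spread of the stacking coefficients. The paper uses Bayesian regression proper; the bootstrap here is only a stand-in for the coefficient distributions it yields.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
y = np.sin(np.linspace(0.0, 6.0, n)) + rng.normal(0.0, 0.1, n)
# hypothetical validation-set predictions of two first-level models
pred_a = y + rng.normal(0.0, 0.2, n)  # the more accurate model
pred_b = y + rng.normal(0.0, 0.4, n)  # the less accurate model

X = np.column_stack([pred_a, pred_b])
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # second-level stacking weights

# bootstrap the coefficients to quantify each model's contribution uncertainty
boot = np.array([np.linalg.lstsq(X[i], y[i], rcond=None)[0]
                 for i in (rng.integers(0, n, n) for _ in range(200))])
w_std = boot.std(axis=0)  # spread of each stacking coefficient
```

The more accurate base model receives the larger weight, and the coefficient spreads show how confidently each model contributes, which is the information used to prune the stacking set.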
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, the functional variations of the force field of dynamical system instances without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level capturing the common force field form among studied dynamical system instances and an inner level adapting to individual system instances. A priori physical knowledge can be conveniently embedded in the neural network architecture as inductive bias, such as conservative force field and Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds, and inversely reveal knowledge on the physical parameters of a system, or as a Neural Gauge to "measure" the physical parameters of an unseen system with observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
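As a very loose, much-simplified analogy of the outer/inner split (not the iMODE architecture itself): several spring-like systems share one candidate force-field basis, while each system adapts its own coefficients. All systems, stiffnesses, and basis functions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
stiffnesses = [1.0, 2.0, 4.0]        # hypothetical per-system physical parameters
x = np.linspace(-1.0, 1.0, 50)       # displacements
basis = np.column_stack([x, x**3])   # shared ("outer-level") force-field basis

coeffs = []
for k in stiffnesses:
    force = -k * x + 0.01 * rng.normal(size=x.size)  # observed forces
    c, *_ = np.linalg.lstsq(basis, force, rcond=None)
    coeffs.append(c)                 # "inner-level" per-system adaptation
coeffs = np.array(coeffs)            # linear terms recover -k; cubic terms ~ 0
```

Reading off the adapted linear coefficient plays the role of the "Neural Gauge": the fitted coefficient reveals the physical stiffness of an unseen system from its observed forces.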
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
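The directional idea can be sketched with a lag-1 Granger comparison on synthetic signals: adding the source channel's lagged values shrinks the residual sum of squares only in the true causal direction. This is a minimal sketch; the paper's phase Granger causality on EEG channels is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)          # driving signal
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()  # x drives y

def granger_rss(target, source, lag=1):
    # residual sum of squares with and without the source's lagged values
    t, s = target[lag:], target[:-lag]
    restricted = np.column_stack([s, np.ones_like(s)])
    full = np.column_stack([s, source[:-lag], np.ones_like(s)])
    rss = lambda A: np.sum((t - A @ np.linalg.lstsq(A, t, rcond=None)[0]) ** 2)
    return rss(restricted), rss(full)

rss_r, rss_f = granger_rss(y, x)    # x -> y: large RSS drop expected
rss_r2, rss_f2 = granger_rss(x, y)  # y -> x: essentially no improvement
```

An F-statistic built from these RSS pairs would formalize the test; the asymmetry between the two directions is what makes a channel a source or a sink.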
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
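The claim that collapse is not specific to neural networks can be illustrated with exact inference in a linear-Gaussian model: when the likelihood does not depend on the latent (the non-identifiable case, weight w = 0), the exact posterior equals the prior. A minimal sketch, not the paper's Brenier-map construction.

```python
import numpy as np

def posterior_params(w, sigma, x):
    # exact posterior for z ~ N(0, 1), x | z ~ N(w * z, sigma^2)
    var = 1.0 / (1.0 + w**2 / sigma**2)
    mean = var * w * x / sigma**2
    return mean, var

# non-identifiable latent (w = 0): posterior collapses to the N(0, 1) prior
m0, v0 = posterior_params(0.0, 1.0, x=2.5)
# identifiable latent (w != 0): posterior moves away from the prior
m1, v1 = posterior_params(2.0, 1.0, x=2.5)
```

With w = 0 the data carry no information about z, so even exact inference returns the prior, mirroring the paper's "collapse iff non-identifiable" result in a classical model.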
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
We derive a set of causal deep neural networks whose architectures are a consequence of tensor (multilinear) factor analysis. Forward causal questions are addressed with a neural network architecture composed of causal capsules and a tensor transformer. The former estimate a set of latent variables that represent the causal factors, and the latter governs their interaction. Causal capsules and tensor transformers may be implemented using shallow autoencoders, but for a scalable architecture we employ block algebra and derive a deep neural network composed of a hierarchy of autoencoders. An interleaved kernel hierarchy preprocesses the data resulting in a hierarchy of kernel tensor factor models. Inverse causal questions are addressed with a neural network that implements multilinear projection and estimates the causes of effects. As an alternative to aggressive bottleneck dimension reduction or regularized regression that may camouflage an inherently underdetermined inverse problem, we prescribe modeling different aspects of the mechanism of data formation with piecewise tensor models whose multilinear projections are well-defined and produce multiple candidate solutions. Our forward and inverse neural network architectures are suitable for asynchronous parallel computation.
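A minimal sketch of the tensor-factor-analysis backbone: a truncated higher-order SVD recovers one orthonormal factor per mode plus a core tensor from a synthetic low-multilinear-rank tensor. The paper's causal capsules, kernel hierarchy, and autoencoder implementation are not reproduced.

```python
import numpy as np

def mode_product(T, M, mode):
    # multiply tensor T by matrix M along the given mode
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # truncated higher-order SVD: one orthonormal factor per mode, plus a core
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_product(core, U.T, m)  # project onto each mode's subspace
    return core, factors

# synthetic 5x5x5 tensor with multilinear rank (2, 2, 2)
rng = np.random.default_rng(5)
Us = [np.linalg.qr(rng.normal(size=(5, 2)))[0] for _ in range(3)]
T = rng.normal(size=(2, 2, 2))
for m, U in enumerate(Us):
    T = mode_product(T, U, m)

core, factors = hosvd(T, (2, 2, 2))
T_hat = core
for m, U in enumerate(factors):
    T_hat = mode_product(T_hat, U, m)  # exact reconstruction at this rank
```

The per-mode factors play the role of the latent causal factors and the core governs their interaction; the deep architectures in the paper replace these SVDs with hierarchies of autoencoders.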